    Combatting Torture: Revitalizing the Toscanino Exception to the Ker-Frisbie Doctrine

    Hydrography with ER-type Raydist

    Probe set algorithms: is there a rational best bet?

    Affymetrix microarrays have become a standard experimental platform for studies of mRNA expression profiling. Their success is due, in part, to the multiple oligonucleotide features (probes) against each transcript (probe set). This multiple testing allows for more robust background assessments and gene expression measures, and has permitted the development of many computational methods to translate image data into a single normalized "signal" for mRNA transcript abundance. Many probe set algorithms have now been developed, with a gradual movement away from chip-by-chip methods (MAS5) toward project-based model-fitting methods (dCHIP, RMA, and others). Data interpretation is often profoundly changed by the choice of algorithm, leaving disoriented biologists questioning what the "accurate" interpretation of their experiment is. Here, we summarize the debate concerning probe set algorithms. We provide examples of how changes in mismatch weight, normalization, and construction of expression ratios each dramatically change data interpretation. All interpretations can be considered computationally appropriate, but with varying biological credibility. We also illustrate the performance of two new hybrid algorithms (PLIER, GC-RMA) relative to more traditional algorithms (dCHIP, MAS5, Probe Profiler PCA, RMA) using an interactive power analysis tool. PLIER appears superior to the other algorithms in avoiding false positives with poorly performing probe sets. Based on our interpretation of the literature and the examples presented here, we suggest that the variability in performance of probe set algorithms depends more on assumptions regarding "background" than on calculations of "signal". We argue that "background" is an enormously complex variable that can only be vaguely quantified, and thus the "best" probe set algorithm will vary from project to project.
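    The "background" question this abstract raises can be made concrete with a toy calculation. The sketch below (illustrative intensity values only; the real MAS5 and RMA algorithms are considerably more involved) contrasts a PM-only global-background correction with a per-probe PM - MM correction, showing that the estimated fold change shifts with the background assumption alone.

```python
import numpy as np

# Toy probe-set data: perfect-match (PM) and mismatch (MM) intensities
# for one transcript on two arrays (all values are illustrative).
pm = np.array([[1200.0, 950.0, 1100.0, 1300.0],
               [2400.0, 1900.0, 2250.0, 2600.0]])
mm = np.array([[400.0, 500.0, 380.0, 450.0],
               [420.0, 510.0, 400.0, 470.0]])

def signal_pm_only(pm, background=300.0):
    """RMA-style assumption: subtract a global background, ignore MM probes."""
    corrected = np.maximum(pm - background, 1.0)
    return np.mean(np.log2(corrected), axis=1)

def signal_pm_minus_mm(pm, mm):
    """MAS5-style assumption: each probe's MM is its own background estimate."""
    corrected = np.maximum(pm - mm, 1.0)
    return np.mean(np.log2(corrected), axis=1)

# Same raw data, two background models, two different fold changes.
fold_pm = signal_pm_only(pm)[1] - signal_pm_only(pm)[0]
fold_mm = signal_pm_minus_mm(pm, mm)[1] - signal_pm_minus_mm(pm, mm)[0]
print(f"log2 fold change, global background: {fold_pm:.2f}")
print(f"log2 fold change, PM - MM background: {fold_mm:.2f}")
```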

    Modelling the spatial distribution of DEM Error

    Assessment of a DEM’s quality is usually undertaken by deriving a measure of DEM accuracy: how close the DEM’s elevation values are to the true elevation. Measures such as root mean squared error (RMSE) and the standard deviation of the error are frequently used. These measures summarise the elevation errors in a DEM as a single value. A more detailed description of DEM accuracy would allow better understanding of DEM quality and the consequent uncertainty associated with using DEMs in analytical applications. The research presented addresses the limitations of using a single RMSE value to represent the uncertainty associated with a DEM by developing a new technique for creating a spatially distributed model of DEM quality: an accuracy surface. The technique is based on the hypothesis that the distribution and scale of elevation error within a DEM are at least partly related to the morphometric characteristics of the terrain. It involves generating a set of terrain parameters to characterise terrain morphometry and developing regression models to define the relationship between DEM error and morphometric character. The regression models form the basis for creating standard deviation surfaces to represent DEM accuracy. The hypothesis is shown to hold, and reliable accuracy surfaces are successfully created. These accuracy surfaces provide more detailed information about DEM accuracy than a single global estimate of RMSE.
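    As a rough illustration of the approach described above, the following sketch regresses checkpoint error magnitude on terrain parameters and predicts a standard-deviation "accuracy surface" over a grid. All data are simulated, and the choice of parameters (slope, roughness), the linear model form, and the coefficients are purely hypothetical stand-ins for the paper's actual morphometric analysis.

```python
import numpy as np

# Hypothetical setup: at checkpoints we know the DEM error and two
# terrain parameters (slope, roughness). Fit a linear model of error
# magnitude on morphometry, then predict a standard-deviation surface.
rng = np.random.default_rng(42)
n = 1000
slope = rng.uniform(0, 40, n)   # degrees
rough = rng.uniform(0, 5, n)    # local relief, metres
# Simulated truth: error spread grows with slope and roughness.
error = rng.normal(0, 0.5 + 0.05 * slope + 0.2 * rough)

# Design matrix: intercept, slope, roughness; response: |error| as a
# proxy for the local error standard deviation.
X = np.column_stack([np.ones(n), slope, rough])
coeffs, *_ = np.linalg.lstsq(X, np.abs(error), rcond=None)

# "Accuracy surface": predicted error spread for a small grid of cells.
grid_slope, grid_rough = np.meshgrid(np.linspace(0, 40, 5),
                                     np.linspace(0, 5, 5))
sd_surface = coeffs[0] + coeffs[1] * grid_slope + coeffs[2] * grid_rough
print("coefficients (intercept, slope, roughness):", np.round(coeffs, 3))
print("predicted spread, flat/smooth vs steep/rough cell:",
      round(float(sd_surface[0, 0]), 2), round(float(sd_surface[-1, -1]), 2))
```

    The fitted surface reproduces the simulated relationship: cells that are steeper and rougher are assigned a larger predicted error spread than flat, smooth cells, which is exactly the kind of spatial detail a single global RMSE cannot convey.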

    Canopy nitrogen, carbon assimilation, and albedo in temperate and boreal forests: Functional relations and potential climate feedbacks

    The availability of nitrogen represents a key constraint on carbon cycling in terrestrial ecosystems, and it is largely in this capacity that the role of N in the Earth's climate system has been considered. Despite this, few studies have included continuous variation in plant N status as a driver of broad-scale carbon cycle analyses. This is partly because of uncertainties in how leaf-level physiological relationships scale to whole ecosystems and because methods for regional to continental detection of plant N concentrations have yet to be developed. Here, we show that ecosystem CO2 uptake capacity in temperate and boreal forests scales directly with whole-canopy N concentrations, mirroring a leaf-level trend that has been observed for woody plants worldwide. We further show that both CO2 uptake capacity and canopy N concentration are strongly and positively correlated with shortwave surface albedo. These results suggest that N plays an additional, and overlooked, role in the climate system via its influence on vegetation reflectivity and shortwave surface energy exchange. We also demonstrate that much of the spatial variation in canopy N can be detected by using broad-band satellite sensors, offering a means through which these findings can be applied toward improved application of coupled carbon cycle–climate models.

    Determining gene expression on a single pair of microarrays

    Background: In microarray experiments the number of replicates is often limited by factors such as cost, availability of sample, or poor hybridization. There are currently few choices for the analysis of a pair of microarrays where N = 1 in each condition. In this paper, we demonstrate the effectiveness of a new algorithm called PINC (PINC is Not Cyber-T) that can analyze Affymetrix microarray experiments. Results: PINC treats each pair of probes within a probe set as an independent measure of gene expression using the Bayesian framework of the Cyber-T algorithm, and then assigns a corrected p-value for each gene comparison. The p-values generated by PINC accurately control the false discovery rate on Affymetrix control data sets, yet are small enough that family-wise error rate methods (such as Holm's step-down method) can be used as a conservative alternative to false discovery rate control with little loss of sensitivity on control data sets. Conclusion: PINC outperforms previously published methods for determining differentially expressed genes when comparing Affymetrix microarrays with N = 1 in each condition. When applied to biological samples, PINC can be used to assess the degree of variability observed among biological replicates, in addition to analyzing isolated pairs of microarrays.
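    The core idea of treating the probes within a probe set as replicates can be sketched as follows. This is a plain paired t statistic on simulated data, not PINC's Bayesian Cyber-T machinery or its p-value correction, and all values are illustrative.

```python
import numpy as np

# With one array per condition, the probe pairs within a probe set act
# as the replicates, so a paired test across probes can flag
# differential expression even when N = 1 arrays per condition.
rng = np.random.default_rng(0)
n_probes = 11  # a typical Affymetrix probe-set size

def paired_t(log_a, log_b):
    """Paired t statistic across the probes of one probe set."""
    d = log_b - log_a
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Two simulated probe sets: one truly changed (+1 log2 unit), one null.
base = rng.normal(8.0, 0.3, n_probes)
t_changed = paired_t(base, base + 1.0 + rng.normal(0, 0.1, n_probes))
t_null = paired_t(base, base + rng.normal(0, 0.3, n_probes))

# Two-sided 5% critical value for df = 10; across many probe sets one
# would convert t to p-values and apply a correction such as Holm's
# step-down method, as the abstract describes.
T_CRIT = 2.228
print("changed probe set significant:", abs(t_changed) > T_CRIT)
print("null probe set t statistic:", round(float(t_null), 2))
```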

    Physico-chemical foundations underpinning microarray and next-generation sequencing experiments

    Hybridization of nucleic acids on solid surfaces is a key process involved in high-throughput technologies such as microarrays and, in some cases, next-generation sequencing (NGS). A physical understanding of the hybridization process helps to determine the accuracy of these technologies. The goal of a broad research programme is to develop reliable transformations between the raw signals reported by these technologies and the individual molecular concentrations in an ensemble of nucleic acids. This research draws on many areas, from bioinformatics and biostatistics, to theoretical and experimental biochemistry and biophysics, to computer simulations. A group of leading researchers met in Ploen, Germany, in 2011 to discuss present knowledge and limitations of our physico-chemical understanding of high-throughput nucleic acid technologies. That meeting inspired this summary, which provides an overview of state-of-the-art, physico-chemically grounded approaches to modelling the hybridization of nucleic acids on solid surfaces. In addition, practical applications of current knowledge are emphasized.
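    One standard physico-chemical starting point for modelling surface hybridization is the Langmuir adsorption isotherm, in which signal saturates as probe sites fill. The sketch below uses purely illustrative parameter values (the binding constant and saturation intensity are made up, not taken from the meeting or the summary) to show why raw intensity tracks concentration linearly only at the low end.

```python
import numpy as np

# Langmuir model: I = s_max * K*c / (1 + K*c), where c is the target
# concentration, K an effective duplex binding constant, and s_max the
# intensity at full surface saturation. All parameters illustrative.
def langmuir_signal(c, K=1e8, s_max=1e4):
    return s_max * K * c / (1.0 + K * c)

conc = np.array([1e-12, 1e-10, 1e-8, 1e-6])
signal = langmuir_signal(conc)

# At low c the response is ~linear (I ~ s_max*K*c); near saturation a
# 100-fold concentration increase barely moves the signal, so a naive
# linear signal-to-concentration transformation breaks down.
for c, s in zip(conc, signal):
    print(f"c = {c:.0e} M  ->  signal = {s:8.1f}")
```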